
    Impact of COVID-19 on cardiovascular testing in the United States versus the rest of the world

    Objectives: This study sought to quantify and compare the decline in volumes of cardiovascular procedures between the United States and non-US institutions during the early phase of the coronavirus disease-2019 (COVID-19) pandemic. Background: The COVID-19 pandemic has disrupted the care of many non-COVID-19 illnesses. Reductions in diagnostic cardiovascular testing around the world have led to concerns over the implications of reduced testing for cardiovascular disease (CVD) morbidity and mortality. Methods: Data were submitted to INCAPS-COVID (International Atomic Energy Agency Non-Invasive Cardiology Protocols Study of COVID-19), a multinational registry comprising 909 institutions in 108 countries (including 155 facilities in 40 U.S. states), assessing the impact of the COVID-19 pandemic on volumes of diagnostic cardiovascular procedures. Data were obtained for April 2020 and compared with baseline procedure volumes from March 2019. We compared laboratory characteristics, practices, and procedure volumes between U.S. and non-U.S. facilities and between U.S. geographic regions, and identified factors associated with volume reduction in the United States. Results: Reductions in procedure volumes in the United States were similar to those in non-U.S. facilities (68% vs. 63%, respectively; p = 0.237), although U.S. facilities reported greater reductions in invasive coronary angiography (69% vs. 53%, respectively; p < 0.001). Significantly more U.S. facilities than non-U.S. facilities reported increased use of telehealth and patient screening measures, such as temperature checks, symptom screenings, and COVID-19 testing. Reductions in procedure volumes differed between U.S. regions, with larger declines observed in the Northeast (76%) and Midwest (74%) than in the South (62%) and West (44%). In a multivariable analysis, COVID-19 prevalence, staff redeployments, outpatient centers, and urban centers were associated with greater volume reductions in U.S. facilities. Conclusions: We observed marked reductions in U.S. cardiovascular testing in the early phase of the pandemic and significant variability between U.S. regions. The association between volume reductions and COVID-19 prevalence in the United States highlights the need for proactive efforts to maintain access to cardiovascular testing in areas most affected by COVID-19 outbreaks.

    ANIMAL MODELS FOR THE STUDY OF LEISHMANIASIS IMMUNOLOGY

    Leishmaniasis remains a major public health problem worldwide and is classified as Category I by the TDR/WHO, mainly due to the absence of effective control. Many experimental models, such as rodents, dogs, and monkeys, have been developed, each with specific features, to characterize the immune response to Leishmania species, but none reproduces the pathology observed in human disease. Conflicting data may arise in part because different parasite strains or species are examined, different tissue targets (mouse footpad, ear, or base of tail) are infected, and different numbers ("low", 1×10², and "high", 1×10⁶) of metacyclic promastigotes are inoculated. Recently, new approaches have been proposed to provide more meaningful data regarding the host response and pathogenesis that parallel human disease. The use of sand fly saliva and low numbers of parasites in experimental infections has made it possible to mimic natural transmission and to identify new molecules and immune mechanisms that should be considered when designing vaccines and control strategies. Moreover, the use of wild rodents as experimental models has been proposed as a good alternative for studying host-pathogen relationships and for testing candidate vaccines. To date, using natural reservoirs to study Leishmania infection has been challenging because immunologic reagents for use in wild rodents are lacking. This review discusses the principal immunological findings against Leishmania infection in different animal models, highlighting the importance of using experimental conditions similar to natural transmission, and reservoir species as experimental models, to study the immunopathology of the disease.

    Oropharyngeal primary tumor segmentation for radiotherapy planning on magnetic resonance imaging using deep learning

    Background and purpose: Segmentation of oropharyngeal squamous cell carcinoma (OPSCC) is needed for radiotherapy planning. We aimed to segment the primary tumor for OPSCC on MRI using convolutional neural networks (CNNs). We investigated the effect of multiple MRI sequences as input, and we proposed a semi-automatic approach for tumor segmentation that is expected to save time in the clinic. Materials and methods: We retrospectively included 171 OPSCC patients treated between 2010 and 2015. For all patients the following MRI sequences were available: T1-weighted, T2-weighted, and 3D T1-weighted after gadolinium injection. We trained a 3D U-Net using the entire images and images with reduced context, considering only information within clipboxes around the tumor. We compared the performance using different combinations of MRI sequences as input. Finally, a semi-automatic approach in which two human observers defined clipboxes around the tumor was tested. Segmentation performance was measured with the Sørensen–Dice coefficient (Dice), 95th percentile Hausdorff distance (HD), and Mean Surface Distance (MSD). Results: The 3D U-Net trained with full context and all sequences as input yielded a median Dice of 0.55, HD of 8.7 mm, and MSD of 2.7 mm. Combining all MRI sequences was better than using single sequences. The semi-automatic approach with all sequences as input yielded significantly better performance (p < 0.001): a median Dice of 0.74, HD of 4.6 mm, and MSD of 1.2 mm. Conclusion: Reducing the amount of context around the tumor and combining multiple MRI sequences improved the segmentation performance. A semi-automatic approach was accurate and clinically feasible.
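As an illustration (not code from the paper), the Sørensen–Dice coefficient reported above can be computed from two binary masks as twice the overlap divided by the total foreground volume; a minimal NumPy sketch on a toy 2D mask standing in for a 3D tumor segmentation:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Sørensen–Dice overlap between two binary masks (1.0 = perfect)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks count as a perfect match
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: a 4x4 "tumor" and a prediction shifted by one voxel
gt = np.zeros((8, 8), dtype=np.uint8)
gt[2:6, 2:6] = 1            # 16 foreground voxels
pred = np.zeros_like(gt)
pred[3:7, 3:7] = 1          # 16 voxels, 9 of which overlap the truth
print(dice_coefficient(pred, gt))  # 2*9 / (16+16) = 0.5625
```

The 95th percentile Hausdorff distance and Mean Surface Distance are computed analogously on the mask surfaces (in mm, using the voxel spacing) rather than on voxel overlap, which is why they complement Dice for elongated or small structures.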

    Strategies for tackling the class imbalance problem of oropharyngeal primary tumor segmentation on magnetic resonance imaging

    Background and purpose: Contouring oropharyngeal primary tumors for radiotherapy is currently done manually, which is time-consuming. Autocontouring techniques based on deep learning methods are a desirable alternative, but these methods can render suboptimal results when the structure to segment is considerably smaller than the rest of the image. The purpose of this work was to investigate different strategies to tackle the class imbalance problem in this tumor site. Materials and methods: A cohort of 230 oropharyngeal cancer patients treated between 2010 and 2018 was retrospectively collected. The following magnetic resonance imaging (MRI) sequences were available: T1-weighted, T2-weighted, and 3D T1-weighted after gadolinium injection. Two strategies to tackle the class imbalance problem were studied: training with different loss functions (namely Dice loss, Generalized Dice loss, Focal Tversky loss, and Unified Focal loss) and implementing a two-stage approach (i.e., splitting the task into detection and segmentation). Segmentation performance was measured with the Sørensen–Dice coefficient (Dice), 95th percentile Hausdorff distance (HD), and Mean Surface Distance (MSD). Results: The network trained with the Generalized Dice loss yielded a median Dice of 0.54, median 95th percentile HD of 10.6 mm, and median MSD of 2.4 mm, but no significant differences were observed among the different loss functions (p-value > 0.7). The two-stage approach resulted in a median Dice of 0.64, median HD of 8.7 mm, and median MSD of 2.1 mm, significantly outperforming the end-to-end 3D U-Net (p-value < 0.05). Conclusion: No significant differences were observed when training with different loss functions. The two-stage approach outperformed the end-to-end 3D U-Net.
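To make the class-imbalance idea concrete: the Generalized Dice loss weights each class inversely to its squared volume, so a small tumor class is not swamped by the much larger background. A minimal NumPy sketch of the standard formulation (an illustration, not the authors' implementation, which would operate on network logits in a deep learning framework):

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """Generalized Dice loss for multi-class segmentation.

    probs, onehot: arrays of shape (C, ...) holding per-class predicted
    probabilities and one-hot ground-truth labels. Each class c gets
    weight w_c = 1 / (volume of class c)^2, so rare classes (e.g. a
    small tumor) contribute as much to the loss as the background.
    """
    axes = tuple(range(1, probs.ndim))          # sum over all spatial dims
    w = 1.0 / (onehot.sum(axis=axes) ** 2 + eps)
    numer = (w * (probs * onehot).sum(axis=axes)).sum()
    denom = (w * (probs + onehot).sum(axis=axes)).sum()
    return 1.0 - 2.0 * numer / denom            # 0 for a perfect prediction
```

A plain (unweighted) Dice loss on the same input would let the background term dominate the gradient; the per-class weighting is the entire point of the "Generalized" variant, which is one reason it is a common default for small-structure segmentation.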